Particle dynamics and multi-agent systems provide accurate dynamical models for studying and forecasting the behavior of complex interacting systems. They often take the form of a high-dimensional system of differential equations parameterized by an interaction kernel that models the underlying attractive or repulsive forces between agents. We consider the problem of constructing a data-based approximation of the interacting forces directly from noisy observations of the paths of the agents in time. The learned interaction kernels are then used to predict the agents' behavior over a longer time interval. The approximation developed in this work uses a randomized feature algorithm and a sparse randomized feature approach. Sparsity-promoting regression provides a mechanism for pruning the randomly generated features, which was observed to be beneficial when data are limited, in particular leading to less overfitting than other approaches. In addition, imposing sparsity reduces the kernel evaluation cost, which significantly lowers the simulation cost for forecasting the multi-agent systems. Our method is applied to various examples, including first-order systems with homogeneous and heterogeneous interactions, second-order homogeneous systems, and a new sheep swarming system.
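As a rough illustration of the pipeline described above, the sketch below simulates a small first-order system with a radial interaction kernel, builds random Fourier features of the pairwise distances, and recovers a sparse feature expansion of the kernel with a Lasso fit. The toy kernel, feature map, noise level, and regularization parameter are illustrative assumptions rather than the paper's exact construction.

```python
# Hedged sketch: learn a radial interaction kernel phi(r) for a first-order system
#   x_i' = (1/N) sum_j phi(|x_j - x_i|) (x_j - x_i)
# from noisy trajectories, using random Fourier features and a sparsity-promoting fit.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
N, d, T, dt = 10, 2, 200, 0.01               # agents, dimension, time steps, step size
phi_true = lambda r: np.exp(-r)               # ground-truth kernel (toy choice)

# Simulate noisy trajectories with forward Euler.
X = np.zeros((T, N, d)); X[0] = rng.normal(size=(N, d))
for t in range(T - 1):
    diff = X[t][None, :, :] - X[t][:, None, :]          # x_j - x_i, shape (N, N, d)
    r = np.linalg.norm(diff, axis=-1) + 1e-12
    X[t + 1] = X[t] + dt * (phi_true(r)[..., None] * diff).mean(axis=1)
X += 0.001 * rng.normal(size=X.shape)                    # observation noise

# Random Fourier features of the pairwise distance: phi(r) ~ sum_k c_k cos(w_k r + b_k).
K = 200
w = rng.normal(scale=2.0, size=K); b = rng.uniform(0, 2 * np.pi, size=K)
feat = lambda r: np.cos(np.outer(r, w) + b)              # (num_pairs, K)

# Linear system: finite-difference velocities against feature-weighted interaction sums.
rows, targets = [], []
for t in range(T - 1):
    diff = X[t][None, :, :] - X[t][:, None, :]
    r = np.linalg.norm(diff, axis=-1) + 1e-12
    F = feat(r.ravel()).reshape(N, N, K)                 # features per pair
    design = (F[..., None] * diff[..., None, :]).mean(axis=1)   # (N, K, d)
    rows.append(design.transpose(0, 2, 1).reshape(-1, K))
    targets.append(((X[t + 1] - X[t]) / dt).reshape(-1))
A, y = np.vstack(rows), np.concatenate(targets)

# Sparsity-promoting regression prunes the random features.
coef = Lasso(alpha=1e-4, fit_intercept=False, max_iter=50000).fit(A, y).coef_
print("nonzero features:", np.count_nonzero(coef), "of", K)
r_test = np.linspace(0.1, 3, 5)
print("phi_hat(r):", feat(r_test) @ coef)
print("phi_true(r):", phi_true(r_test))
```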
The spectra of random feature matrices provide essential information on the conditioning of the linear system used in random feature regression problems and are thus connected to the consistency and generalization of random feature models. Random feature matrices are asymmetric rectangular nonlinear matrices depending on two input variables, the data and the weights, which can make their characterization challenging. We consider two settings for these two inputs: either both are random variables, or one is a random variable and the other is well separated, i.e., there is a minimum distance between points. Under conditions on the dimension, the complexity ratio, and the sampling variance, we show that the singular values of these matrices concentrate near their full expectation and near one with high probability. In particular, since the dimension depends only on the logarithm of the number of random weights or the number of data points, our complexity bounds can be achieved even in moderate dimensions for many practical settings. The theoretical results are verified with numerical experiments.
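The concentration claim can be probed numerically. The sketch below builds a random feature matrix with complex-exponential features of Gaussian data and Gaussian weights, and reports how the spread of its singular values tightens around one as the complexity ratio $N/m$ grows. The specific feature map, dimension, and scalings are illustrative assumptions, not the paper's exact setting.

```python
# Hedged numerical check: singular values of a normalized random feature matrix
# A_{jk} = exp(i <x_j, w_k>) / sqrt(N) cluster near 1 as N/m grows.
import numpy as np

rng = np.random.default_rng(0)
m, d = 100, 20                               # data points, moderate input dimension
X = rng.normal(size=(m, d))                  # random data
for N in (200, 800, 3200):                   # number of random weights (features)
    W = rng.normal(size=(d, N))              # random weights
    A = np.exp(1j * X @ W) / np.sqrt(N)      # random feature matrix
    s = np.linalg.svd(A, compute_uv=False)
    print(f"N/m = {N // m:3d}  singular values in [{s.min():.3f}, {s.max():.3f}]")
```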
Sparse shrunk additive models and sparse random feature models were developed separately as methods for learning low-order functions, in which there are few interactions between variables, but neither offers computational efficiency. On the other hand, $\ell_2$-based shrunk additive models are efficient but do not offer feature selection, since the resulting coefficient vectors are dense. Inspired by the success of the iterative magnitude pruning technique in finding lottery tickets of neural networks, we propose a new method, Sparser Random Feature Models via IMP (ShRIMP), to efficiently fit high-dimensional data with inherent low-dimensional structure in the form of sparse variable dependencies. Our method can be viewed as a combined process for constructing and finding sparse lottery tickets of two-layer dense networks. We explain the observed benefits of ShRIMP through a refined analysis of the generalization error for thresholded basis pursuit and the resulting bounds. From function approximation experiments on both synthetic data and real-world benchmark datasets, we show that ShRIMP obtains better than or competitive test accuracy compared with state-of-the-art sparse feature and additive methods such as SRFE-S, SSAM, and SALSA. Meanwhile, ShRIMP performs feature selection with low computational complexity and is robust to the pruning rate, indicating robustness in the structure of the obtained subnetworks. By noting the correspondence between our model and weight/neuron subnetworks, we gain insight into the lottery ticket hypothesis through ShRIMP.
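A minimal sketch of the iterative-magnitude-pruning idea is given below: draw random features each supported on a few coordinates, fit a ridge-regularized least squares model, repeatedly prune the smallest-magnitude coefficients, and refit on the surviving features. The toy target function, sparse-support feature draw, pruning schedule, and ridge parameter are illustrative assumptions and not the exact ShRIMP algorithm.

```python
# Hedged sketch of iterative magnitude pruning on a sparse random feature model.
import numpy as np

rng = np.random.default_rng(0)
d, n, K, q = 10, 500, 1024, 2                     # input dim, samples, initial features, feature order
f = lambda X: np.sin(np.pi * X[:, 0]) + X[:, 1] * X[:, 2]   # low-order target function

X = rng.uniform(-1, 1, size=(n, d)); y = f(X)
X_test = rng.uniform(-1, 1, size=(n, d)); y_test = f(X_test)

# Random features with sparse supports: each feature depends on q random coordinates.
W = np.zeros((d, K))
for k in range(K):
    W[rng.choice(d, size=q, replace=False), k] = rng.normal(size=q)
b = rng.uniform(0, 2 * np.pi, K)
feats = lambda X, idx: np.cos(X @ W[:, idx] + b[idx])

def ridge_fit(A, y, lam=1e-6):
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

active = np.arange(K)
for _ in range(5):                                # IMP rounds: halve the features, refit
    c = ridge_fit(feats(X, active), y)
    keep = np.argsort(np.abs(c))[len(c) // 2:]    # keep the largest-magnitude coefficients
    active = np.sort(active[keep])

c = ridge_fit(feats(X, active), y)
rel_err = np.linalg.norm(feats(X_test, active) @ c - y_test) / np.linalg.norm(y_test)
print(f"kept {len(active)} of {K} features, relative test error {rel_err:.3f}")
```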
We provide (high-probability) bounds on the condition number of random feature matrices. In particular, we show that if the complexity ratio $\frac{N}{m}$, where $N$ is the number of random features and $m$ is the number of data samples, scales like $\log^{-1}(N)$ or $\log(m)$, then the random feature matrix is well-conditioned. This result holds without regularization and relies on establishing various concentration bounds between dependent components of the random feature matrix. Additionally, we obtain bounds on the restricted isometry constant of the random feature matrix. We prove that the risk associated with regression problems using a random feature matrix exhibits the double descent phenomenon and that this is an effect of the double descent behavior of the condition number. The risk bounds cover the underparameterized setting using the least squares problem and the overparameterized setting using the minimum-norm interpolation problem or a sparse regression problem. For the least squares and sparse regression cases, we show that the risk decreases as $m$ and $N$ increase, even in the presence of bounded or random noise. The risk bounds match the optimal scaling in the literature, and the constants in our results are explicit and independent of the dimension of the data.
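The double descent behavior of the condition number can be seen in a small experiment: fix the number of data samples $m$, sweep the number of random features $N$ across the interpolation threshold $N = m$, and record the ratio of the largest to smallest nonzero singular value of the feature matrix. The feature map and scalings below are illustrative assumptions.

```python
# Hedged illustration: condition number of a random feature matrix peaks near N = m.
import numpy as np

rng = np.random.default_rng(0)
m, d = 100, 20
X = rng.normal(size=(m, d))                              # data points

for N in (25, 50, 90, 100, 110, 200, 400, 1600):         # number of random features
    W = rng.normal(size=(d, N))
    A = np.cos(X @ W + rng.uniform(0, 2 * np.pi, N)) / np.sqrt(N)
    s = np.linalg.svd(A, compute_uv=False)               # min(m, N) singular values
    print(f"N={N:5d}  cond(A) = {s.max() / s.min():10.2f}")
```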
Because of their close relationship with humans, non-human apes (chimpanzees, bonobos, gorillas, orangutans, and gibbons, including siamangs) are of great scientific interest. The goal of understanding their complex behavior would be greatly advanced by the ability to perform video-based pose tracking. Tracking, however, requires high-quality annotated datasets of ape photographs. Here we present OpenApePose, a new public dataset of 71,868 photographs, annotated with 16 body landmarks, of six ape species in naturalistic contexts. We show that a standard deep net (HRNet-W48) trained on ape photos can reliably track out-of-sample ape photos better than networks trained on monkeys (specifically, the OpenMonkeyPose dataset) and on humans (COCO) can. This trained network can track apes almost as well as the other networks can track their respective taxa, and models trained without one of the six ape species can track the held out species better than the monkey and human models can. Ultimately, the results of our analyses highlight the importance of large specialized databases for animal tracking systems and confirm the utility of our new ape database.
Breast cancer is the second most common type of cancer in women in Canada and the United States, representing over 25% of all new female cancer cases. Neoadjuvant chemotherapy treatment has recently risen in usage: it may result in a pathologic complete response (pCR), and it can shrink inoperable breast cancer tumors prior to surgery so that they become operable. However, it is difficult to predict a patient's pathologic response to neoadjuvant chemotherapy. In this paper, we investigate the efficacy of leveraging learnt volumetric deep features from a newly introduced magnetic resonance imaging (MRI) modality called synthetic correlated diffusion imaging (CDI$^s$) for the purpose of pCR prediction. More specifically, we leverage a volumetric convolutional neural network to learn volumetric deep radiomic features from a pre-treatment cohort and construct a predictor based on the learnt features using the post-treatment response. As the first study to explore the utility of CDI$^s$ within a deep learning perspective for clinical decision support, we evaluated the proposed approach using the ACRIN-6698 study against predictors learnt from gold-standard imaging modalities, and found that the proposed approach can provide enhanced pCR prediction performance and thus may be a useful tool to aid oncologists in improving treatment recommendations for patients. This approach to leveraging volumetric deep radiomic features (which we name Cancer-Net BCa) can subsequently be extended to other applications of CDI$^s$ in the cancer domain to further improve prediction performance.
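A minimal sketch of the kind of volumetric feature extractor described above is given below: a small 3D convolutional network maps an imaging volume to a vector of deep radiomic features, with a binary head producing the pCR logit. The layer sizes, input shape, and names are illustrative assumptions, not the Cancer-Net BCa architecture.

```python
# Hedged sketch of a volumetric (3D) CNN feature extractor with a pCR prediction head.
import torch
import torch.nn as nn

class VolumetricRadiomic(nn.Module):
    def __init__(self, n_features=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(64, n_features), nn.ReLU(),
        )
        self.head = nn.Linear(n_features, 1)        # pCR vs. non-pCR logit

    def forward(self, volume):                      # volume: (batch, 1, D, H, W)
        features = self.backbone(volume)            # learnt volumetric deep features
        return self.head(features), features

model = VolumetricRadiomic()
logits, feats = model(torch.randn(2, 1, 32, 64, 64))   # dummy pre-treatment volumes
print(logits.shape, feats.shape)                        # (2, 1) and (2, 128)
```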
The quality of a dataset plays a crucial role in the successful training and deployment of deep learning models. Particularly in the medical domain, where system performance may affect patients' health, clean datasets are a safety requirement for reliable predictions. Outlier detection is therefore an essential process when building autonomous clinical decision systems. In this work, we assess the suitability of self-organizing maps for outlier detection, specifically targeting a medical dataset containing quantitative phase images of white blood cells. We detect and evaluate outliers based on quantization errors and distance maps. Our findings confirm the suitability of self-organizing maps for unsupervised out-of-distribution detection on the dataset at hand. The self-organizing maps perform on par with filters specified manually from expert domain knowledge. Moreover, they show promise as a tool for exploring and cleaning medical datasets. As a direction for future research, we suggest combining self-organizing maps with deep-learning-based feature extraction.
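A minimal sketch of quantization-error-based outlier detection with a self-organizing map is given below, using synthetic stand-in feature vectors rather than white-blood-cell phase images. The grid size, training schedules, and scoring rule are illustrative assumptions; the distance-map criterion mentioned above is not shown.

```python
# Hedged sketch: train a small SOM, then score samples by their quantization error
# (distance to the best-matching unit); unusually large errors suggest outliers.
import numpy as np

rng = np.random.default_rng(0)
inliers = rng.normal(0.0, 1.0, size=(1000, 8))        # stand-in "clean" feature vectors
outliers = rng.normal(6.0, 1.0, size=(5, 8))          # stand-in contaminated samples
data = np.vstack([inliers, outliers])

rows, cols, dim, iters = 10, 10, data.shape[1], 5000
weights = rng.normal(size=(rows, cols, dim))
grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

for t in range(iters):                                # classic online SOM updates
    x = data[rng.integers(len(data))]
    lr = 0.5 * (0.05 / 0.5) ** (t / iters)            # decaying learning rate
    sigma = 3.0 * (1.0 / 3.0) ** (t / iters)          # decaying neighborhood radius
    bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), (rows, cols))
    h = np.exp(-((grid - np.array(bmu)) ** 2).sum(-1) / (2 * sigma ** 2))
    weights += lr * h[..., None] * (x - weights)

# Per-sample quantization error and the most suspicious samples.
qe = np.array([np.min(np.linalg.norm(weights - x, axis=-1)) for x in data])
suspects = np.argsort(qe)[-10:]
print("largest quantization errors at indices:", suspects)   # injected cluster sits at 1000-1004
```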
We construct a physically parameterized probabilistic autoencoder (PAE) to learn the intrinsic diversity of Type Ia supernovae (SNe Ia) from a sparse set of spectral time series. The PAE is a two-stage generative model, consisting of an autoencoder (AE) that is probabilistically interpreted after training with a normalizing flow (NF). We demonstrate that the PAE learns a low-dimensional latent space that captures the nonlinear range of features present within the population and can accurately model the spectral evolution of SNe Ia across the full range of wavelengths and observation times directly from the data. By introducing a correlation penalty term and a multi-stage training setup alongside our physically parameterized network, we show that intrinsic and extrinsic modes of variability can be separated during training, removing the need for additional models to perform standardization. We then use the PAE in a number of downstream tasks on SNe Ia for increasingly precise cosmological analyses, including the automatic detection of SN outliers, the generation of samples consistent with the data distribution, and solving the inverse problem in the presence of noisy and incomplete data to constrain cosmological distance measurements. We find that the optimal number of intrinsic model parameters appears to be three, consistent with previous studies, and show that we can standardize our test sample of SNe Ia to $0.091 \pm 0.010$ mag, which corresponds to $0.074 \pm 0.010$ mag if peculiar-velocity contributions are removed. Trained models and code are released at \href{https://github.com/georgestein/supaernova}{github.com/georgestein/supaernova}.
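A minimal two-stage sketch of the probabilistic-autoencoder idea is given below in PyTorch: an autoencoder is first trained on toy spectral-like sequences, then a single affine-coupling normalizing flow is fit to the standardized latent codes so that the latent log-likelihood can serve as an outlier score (generation would additionally require the flow's inverse, omitted here). The architecture sizes, toy data, and scoring rule are illustrative assumptions, not the physically parameterized PAE of the paper.

```python
# Hedged two-stage PAE sketch: autoencoder, then a normalizing flow on the latents.
import torch
import torch.nn as nn

torch.manual_seed(0)
# Toy "spectral time series": scaled sine templates plus noise, shape (512, 50).
X = torch.sin(torch.linspace(0, 6.28, 50))[None, :] * torch.rand(512, 1) + 0.05 * torch.randn(512, 50)

latent = 3
enc = nn.Sequential(nn.Linear(50, 32), nn.ReLU(), nn.Linear(32, latent))
dec = nn.Sequential(nn.Linear(latent, 32), nn.ReLU(), nn.Linear(32, 50))

opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
for _ in range(500):                              # stage 1: autoencoder reconstruction
    opt.zero_grad()
    loss = ((dec(enc(X)) - X) ** 2).mean()
    loss.backward()
    opt.step()

class Coupling(nn.Module):                        # one RealNVP-style affine coupling layer
    def __init__(self, d, split=1):
        super().__init__()
        self.split = split
        self.net = nn.Sequential(nn.Linear(split, 16), nn.ReLU(), nn.Linear(16, 2 * (d - split)))

    def forward(self, z):                         # returns u = f(z) and log|det df/dz|
        z1, z2 = z[:, :self.split], z[:, self.split:]
        s, t = self.net(z1).chunk(2, dim=1)
        return torch.cat([z1, z2 * torch.exp(s) + t], dim=1), s.sum(dim=1)

Z_raw = enc(X).detach()
mu, sd = Z_raw.mean(0), Z_raw.std(0) + 1e-6
Z = (Z_raw - mu) / sd                             # standardized latent codes

flow, base = Coupling(latent), torch.distributions.Normal(0.0, 1.0)
opt2 = torch.optim.Adam(flow.parameters(), lr=1e-3)
for _ in range(500):                              # stage 2: maximize latent log-likelihood
    opt2.zero_grad()
    u, logdet = flow(Z)
    nll = -(base.log_prob(u).sum(dim=1) + logdet).mean()
    nll.backward()
    opt2.step()

def score(x):                                     # outlier score: latent negative log-likelihood
    u, logdet = flow((enc(x) - mu) / sd)
    return -(base.log_prob(u).sum(dim=1) + logdet)

with torch.no_grad():
    print("typical samples:  ", score(X[:5]))
    print("corrupted samples:", score(X[:5] + 3.0))
```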
Reservoir computers (RCs) are among the fastest of all neural networks to train, especially when compared with other recurrent neural networks, while still handling sequential data well. However, the adoption of RCs has lagged behind that of other neural network models because of the model's sensitivity to its hyperparameters (HPs). A modern, unified software package that automatically tunes these parameters is missing from the literature. Tuning these numbers by hand is very difficult, and the cost of traditional grid-search methods grows exponentially with the number of HPs considered, discouraging the use of RCs and limiting the complexity of the RC models that can be devised. We address these problems by introducing RcTorch, a PyTorch-based RC neural network package with automatic HP tuning. In this paper, we demonstrate the utility of RcTorch by using it to predict the complex dynamics of a driven pendulum subject to varying forces. This work includes coding examples. Example Python Jupyter notebooks can be found on our GitHub repository https://github.com/blindedjoy/rctorch, and documentation can be found at https://rctorch.readthedocs.io/.
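Since RcTorch's own API is not reproduced here, the sketch below illustrates the underlying reservoir-computing idea with a generic echo state network in NumPy: a fixed random recurrent reservoir is driven by the input signal, and only a linear readout is trained by ridge regression. The reservoir size, spectral radius, leak rate, and the toy forced-pendulum task are illustrative hyperparameter choices, i.e., exactly the kind of HPs that RcTorch tunes automatically.

```python
# Hedged sketch of a generic echo state network (not the RcTorch API).
import numpy as np

rng = np.random.default_rng(0)
T, n_res, rho, leak, ridge = 3000, 300, 0.9, 0.3, 1e-6

# Toy task: one-step-ahead prediction of a periodically forced, damped pendulum.
dt, state, series = 0.01, np.array([1.0, 0.0]), []
for t in range(T + 1):
    theta, omega = state
    force = 0.5 * np.sin(0.8 * t * dt)
    state = state + dt * np.array([omega, -0.2 * omega - np.sin(theta) + force])
    series.append(state[0])
series = np.array(series)
u, y = series[:-1], series[1:]                       # input signal and one-step target

W_in = rng.uniform(-0.5, 0.5, size=(n_res, 1))       # fixed random input weights
W = rng.normal(size=(n_res, n_res))                  # fixed random reservoir weights
W *= rho / max(abs(np.linalg.eigvals(W)))            # rescale to spectral radius rho

states, x = np.zeros((T, n_res)), np.zeros(n_res)
for t in range(T):                                   # run the reservoir (no training here)
    x = (1 - leak) * x + leak * np.tanh(W @ x + W_in[:, 0] * u[t])
    states[t] = x

warm = 100                                           # discard the initial transient
A, b = states[warm:], y[warm:]
W_out = np.linalg.solve(A.T @ A + ridge * np.eye(n_res), A.T @ b)   # ridge readout
pred = A @ W_out
print("one-step NRMSE:", np.sqrt(np.mean((pred - b) ** 2)) / np.std(b))
```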
Language models demonstrate both quantitative improvements and new qualitative capabilities as they increase in scale. Despite their potentially transformative impact, these new capabilities remain poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and mitigate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, mathematics, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to billions of parameters. In addition, a team of human expert raters performed all tasks to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically grows with scale in settings with ambiguous context, but this can be improved with prompting.